16 research outputs found

    Anomaly payload signature generation system based on efficient tokenization methodology

    © 2018 Praise Worthy Prize S.r.l. All rights reserved. Signature-based intrusion detection systems are widely used as an efficient network security control. Unfortunately, security experts must manually craft attack signatures after capturing and analyzing the exploit code, so those systems can only detect known attacks. In this paper, we propose a new automated, content-based signature generation system that generates anomaly profiles to detect new and previously unknown attacks and worms. The proposed system, denoted SCANS, uses a natural tokenization method that speeds up the signature generation process by producing fewer substrings. We also propose a new stop-character technique that overcomes the substring-granularity limitations of older stop-word techniques. In addition, SCANS introduces an improved normalized binary detection model specifically tailored for attack detection. Experimental testing on the DARPA IDS dataset shows a 95% malicious-packet detection rate for port 23, with specificity of 88.4% and 94.6% for ports 21 and 25, respectively.
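
    As a rough illustration of the stop-character tokenization idea described above (not the paper's SCANS implementation), the following sketch splits payloads at an assumed set of stop characters and keeps the substrings shared by most malicious payloads as signature candidates; the delimiter set and support threshold are made up.

        # Illustrative sketch of stop-character tokenization for payload signature
        # generation (stop characters and threshold are assumptions, not from the paper).
        from collections import Counter
        from typing import Iterable, List

        STOP_CHARS = set(" \t\r\n;&|/?=")   # assumed delimiter set

        def tokenize(payload: str) -> List[str]:
            """Split a payload into substrings at stop characters."""
            tokens, current = [], []
            for ch in payload:
                if ch in STOP_CHARS:
                    if current:
                        tokens.append("".join(current))
                        current = []
                else:
                    current.append(ch)
            if current:
                tokens.append("".join(current))
            return tokens

        def candidate_signature(payloads: Iterable[str], min_support: float = 0.8) -> List[str]:
            """Keep tokens appearing in at least `min_support` of the malicious payloads."""
            payloads = list(payloads)
            counts = Counter()
            for p in payloads:
                counts.update(set(tokenize(p)))
            threshold = min_support * len(payloads)
            return [tok for tok, c in counts.items() if c >= threshold]

        if __name__ == "__main__":
            malicious = [
                "GET /cgi-bin/exploit?cmd=/bin/sh HTTP/1.0",
                "GET /cgi-bin/exploit?cmd=/bin/id HTTP/1.0",
            ]
            print(candidate_signature(malicious))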

    Auto Signature Verification Using Line Projection Features Combined With Different Classifiers and Selection Methods

    Signature verification plays an important role in the commercial, legal, and financial fields. The signature remains one of the most widely used forms of authentication for documents such as checks, credit card transaction receipts, and other legal documents. In this study, we propose a system for validating handwritten bank check signatures to determine whether a signature is original or forged. The proposed system comprises several steps: improving the signature image quality, noise reduction, feature extraction, and analysis. The extracted features are based on the signature line and projection features. Different classification methods are used to verify signatures, and the system is trained with a set of signatures to demonstrate the validity of the proposed verification approach. The experimental results show that the best accuracy of 100% was obtained by combining several classification methods.
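
    The sketch below shows one common way to build line-projection features from a binarized signature image and compare them with a simple nearest-reference check; it is an assumed simplification, not the paper's exact feature set or classifier combination.

        # Illustrative sketch of projection-profile features for signature verification
        # (array shapes, resampling, and the nearest-reference check are assumptions).
        import numpy as np

        def projection_features(binary_img: np.ndarray, bins: int = 16) -> np.ndarray:
            """Horizontal and vertical projection profiles, resampled to a fixed length."""
            rows = binary_img.sum(axis=1).astype(float)   # horizontal projection
            cols = binary_img.sum(axis=0).astype(float)   # vertical projection
            def resample(v):
                idx = np.linspace(0, len(v) - 1, bins)
                return np.interp(idx, np.arange(len(v)), v)
            feat = np.concatenate([resample(rows), resample(cols)])
            norm = np.linalg.norm(feat)
            return feat / norm if norm else feat

        def is_genuine(query: np.ndarray, references: list, threshold: float = 0.15) -> bool:
            """Accept the query if it is close enough to any reference signature."""
            q = projection_features(query)
            dists = [np.linalg.norm(q - projection_features(r)) for r in references]
            return min(dists) <= threshold

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            ref = (rng.random((60, 200)) > 0.7).astype(int)
            print(is_genuine(ref, [ref]))   # identical image -> genuine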

    Comprehensive Performance Analysis of RPL Objective Functions in IoT Networks

    As the large-scale deployment of IoT networks rapidly accelerates, many researchers are analyzing the performance of RPL, the widely used routing protocol for wireless sensor networks. Such analyses usually involve a small number of metrics studied under a limited number of scenarios. In this paper, however, we provide a comprehensive study of the performance of the two objective functions used in RPL, MRHOF and OF0, using the Cooja simulator in the Contiki operating system. We use static-grid and mobile-random topologies with 25, 49, and 81 sender nodes and one sink node. Each topology was tested with three transmission ranges of 11, 20, and 50 meters to simulate sparse, moderate, and dense networks. The selected metrics are convergence time, changes in the DODAG tree structure, average churn, average power consumption, average listen duty cycle, average transmit duty cycle, average received packets, average lost packets, average duplicate packets, and average hop count. In fixed networks, the results show that OF0 usually performs better than MRHOF in terms of energy consumption, convergence time in the static-grid topology, listen duty cycle, and transmit duty cycle.
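
    To make the difference between the two objective functions concrete, the following toy sketch compares how OF0 (fixed per-hop rank step) and MRHOF (link-quality-driven, here ETX-based) would choose a preferred parent; the constants and candidate values are simplified placeholders, not taken from the paper or its simulations.

        # Toy sketch contrasting how OF0 and MRHOF pick a preferred parent
        # (constants and candidate parents are illustrative assumptions).
        DEFAULT_RANK_INCREASE = 256      # assumed fixed per-hop rank step for OF0
        ETX_SCALE = 128                  # assumed rank units per unit of ETX for MRHOF

        def of0_rank(parent_rank: int) -> int:
            """OF0: rank grows by a fixed step per hop, regardless of link quality."""
            return parent_rank + DEFAULT_RANK_INCREASE

        def mrhof_rank(parent_rank: int, link_etx: float) -> int:
            """MRHOF with ETX: rank grows with the link's expected transmission count."""
            return parent_rank + int(link_etx * ETX_SCALE)

        def preferred_parent(candidates, objective):
            """Pick the candidate parent that yields the lowest rank for this node."""
            return min(candidates, key=objective)

        if __name__ == "__main__":
            # (parent_rank, link_etx) pairs for two candidate parents
            candidates = [(256, 4.0), (512, 1.0)]
            best_of0 = preferred_parent(candidates, lambda c: of0_rank(c[0]))
            best_mrhof = preferred_parent(candidates, lambda c: mrhof_rank(c[0], c[1]))
            print("OF0 picks parent with rank", best_of0[0])     # ignores link quality
            print("MRHOF picks parent with rank", best_mrhof[0]) # prefers the better link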

    Enhanced Epileptic Seizure diagnosis using EEG Signals with Support vector machine and Bagging Classifiers

    Many approaches have been proposed for detecting epileptic seizures, a severe neurological condition, in their early stages using the electroencephalogram (EEG). In practice, clinicians still rely on manual inspection of EEG signals. Artificial intelligence (AI) and machine learning (ML) can deal with this problem effectively: ML can classify EEG signals using feature extraction techniques. This work focuses on automated detection of epileptic seizures using ML techniques. Various algorithms are investigated, such as Bagging, Decision Tree (DT), AdaBoost, Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Artificial Neural Network (ANN), Naïve Bayes, and Random Forest (RF), to distinguish seizure-related signals from normal ones with high accuracy. In this work, 54 discrete wavelet transforms (DWTs) are used for feature extraction, and a similarity distance is applied to identify the most powerful features, which are then selected to form the feature matrix. The matrix is subsequently used to train the ML models. The proposed approach is evaluated using metrics such as F-measure, precision, accuracy, and recall. The experimental results show that the SVM and Bagging classifiers outperform all other classifiers on some dataset combinations.
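
    A minimal sketch of a DWT-feature plus SVM/Bagging pipeline of the kind described above, run on synthetic signals since the paper's EEG data is not reproduced here; the wavelet, sub-band statistics, and classifier settings are assumptions.

        # Minimal sketch: DWT sub-band statistics as features, then SVM and Bagging
        # classifiers (synthetic data and all settings are illustrative assumptions).
        import numpy as np
        import pywt
        from sklearn.svm import SVC
        from sklearn.ensemble import BaggingClassifier
        from sklearn.model_selection import cross_val_score

        def dwt_features(signal: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
            """Summary statistics of each DWT sub-band as a feature vector."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            feats = []
            for c in coeffs:
                feats += [c.mean(), c.std(), np.abs(c).max()]
            return np.array(feats)

        def synthetic_dataset(n_per_class: int = 100, length: int = 256, seed: int = 0):
            """Normal signals = noise; 'seizure' signals add a strong rhythmic component."""
            rng = np.random.default_rng(seed)
            t = np.linspace(0, 1, length)
            X, y = [], []
            for label in (0, 1):
                for _ in range(n_per_class):
                    sig = rng.normal(0, 1, length)
                    if label == 1:
                        sig += 3 * np.sin(2 * np.pi * 8 * t)
                    X.append(dwt_features(sig))
                    y.append(label)
            return np.array(X), np.array(y)

        if __name__ == "__main__":
            X, y = synthetic_dataset()
            for name, clf in [("SVM", SVC(kernel="rbf")), ("Bagging", BaggingClassifier(n_estimators=50))]:
                scores = cross_val_score(clf, X, y, cv=5)
                print(f"{name}: mean accuracy = {scores.mean():.3f}")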

    History-based consistency algorithm for the trickle-timer with low-power and lossy networks

    Recently, the Internet of Things (IoT) has become an important concept that has changed the vision of the Internet with the appearance of IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN). However, 6LoWPANs have many drawbacks because they comprise many devices with limited resources; therefore, suitable protocols such as the Routing Protocol for Low-Power and Lossy Networks (RPL) were developed. One of RPL's main components is the trickle timer algorithm, used to control and maintain the routing traffic frequency generated by a set of control messages. The trickle timer, however, suffers from the short-listen problem, which was handled by adding a listen-only period. This addition increases the delay in propagating transmissions and in resolving inconsistencies in the network. To solve this problem, we propose the history-based consistency (HBC) algorithm, which eliminates the listen-only period based on the consistency period of the network. The proposed algorithm shows very good results. We measured the performance of HBC trickle in terms of convergence time, the metric most affected, as well as power consumption and packet delivery ratio (PDR), and compared it against the original trickle timer, E-Trickle, and the optimized trickle. In some cases, PDR and power consumption were better under HBC trickle than under the other trickle timers; in the remaining cases, the results were very close to those of the original trickle, indicating the efficiency of the proposed algorithm in choosing optimal routes when sending messages.
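
    The following simplified sketch shows the core idea attributed to HBC: skip the standard listen-only half of the trickle interval once the network has been consistent for a number of consecutive intervals. The threshold and timing constants are assumptions, not the paper's exact rules.

        # Simplified trickle interval with a history-based consistency rule
        # (the HBC condition, threshold, and timings here are assumed for illustration).
        import random

        class HBCTrickle:
            def __init__(self, imin=0.5, imax_doublings=8, k=1, history_threshold=3):
                self.imin = imin
                self.imax = imin * (2 ** imax_doublings)
                self.k = k                                  # redundancy constant
                self.history_threshold = history_threshold  # consecutive consistent intervals
                self.interval = imin
                self.consistent_streak = 0
                self.counter = 0

            def start_interval(self):
                """Pick the transmission point t within the current interval."""
                self.counter = 0
                if self.consistent_streak >= self.history_threshold:
                    low = 0.0                               # HBC: skip the listen-only period
                else:
                    low = self.interval / 2                 # standard trickle listen-only period
                self.t = random.uniform(low, self.interval)
                return self.t

            def hear_consistent(self):
                self.counter += 1

            def should_transmit(self) -> bool:
                return self.counter < self.k

            def end_interval(self, inconsistency_seen: bool):
                """Double the interval on consistency, reset it on inconsistency."""
                if inconsistency_seen:
                    self.interval = self.imin
                    self.consistent_streak = 0
                else:
                    self.interval = min(self.interval * 2, self.imax)
                    self.consistent_streak += 1

        if __name__ == "__main__":
            timer = HBCTrickle()
            for _ in range(6):
                t = timer.start_interval()
                print(f"interval={timer.interval:.2f}s, transmit at t={t:.2f}s")
                timer.end_interval(inconsistency_seen=False)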

    Optimized Job Scheduling approach based on Genetic algorithms in smart Grid environment

    Advances in communications and information technologies play a major role in all aspects of our lives. One major aspect affecting our daily lives is the power grid, whose evolution has led to what we call the smart grid. One of the major challenges in these grids is optimizing consumption and resource usage. This paper presents an optimized job scheduling approach using a genetic algorithm that minimizes the cost of completing different tasks in a grid environment. In a grid environment, different independent appliances share the same resources, depending on resource availability and each appliance's need to run. Job scheduling approaches range from conventional strategies to Ant Colony (AC) optimization and Genetic Algorithms (GA). In this paper, we present a cost-optimized genetic algorithm for appliance job scheduling that considers parameters such as job duration, resource availability, and job start priority. The proposed approach is tested using a simulator written in the C++ programming language. The results show that the total cost savings are better than those of previous approaches.
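
    The paper's simulator is written in C++; the compact sketch below only illustrates the general shape of a genetic algorithm for appliance scheduling under a time-of-use tariff, with made-up jobs, prices, and GA settings.

        # Compact GA sketch for appliance job scheduling (jobs, tariff, and GA
        # parameters below are illustrative assumptions, not the paper's setup).
        import random

        PRICES = [0.10] * 8 + [0.30] * 8 + [0.18] * 8     # assumed hourly tariff (24 slots)
        JOBS = [  # (duration_hours, power_kw, earliest_start)
            (2, 1.5, 0), (3, 2.0, 6), (1, 0.8, 0), (4, 1.0, 12),
        ]

        def cost(schedule):
            """Total energy cost when each job i starts at schedule[i]."""
            total = 0.0
            for (dur, kw, earliest), start in zip(JOBS, schedule):
                if start < earliest or start + dur > 24:
                    return float("inf")                   # infeasible start time
                total += sum(PRICES[h] * kw for h in range(start, start + dur))
            return total

        def random_schedule():
            return [random.randint(e, 24 - d) for d, _, e in JOBS]

        def crossover(a, b):
            cut = random.randrange(1, len(a))
            return a[:cut] + b[cut:]

        def mutate(s, rate=0.2):
            return [random_schedule()[i] if random.random() < rate else g for i, g in enumerate(s)]

        def genetic_schedule(pop_size=40, generations=100):
            pop = [random_schedule() for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=cost)
                survivors = pop[: pop_size // 2]
                children = [mutate(crossover(*random.sample(survivors, 2)))
                            for _ in range(pop_size - len(survivors))]
                pop = survivors + children
            return min(pop, key=cost)

        if __name__ == "__main__":
            best = genetic_schedule()
            print("best start times:", best, "cost:", round(cost(best), 2))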

    On the optimality of route selection in grid wireless sensor networks: Theory and applications

    © 2020, Innovative Information Science and Technology Research Group. All rights reserved. Wireless Sensor Networks (WSNs) provide the necessary infrastructure for the successful realization of emerging technological advancements such as smart places. In a WSN, information is collected from the target locations using sensors, which can also act as relay nodes to deliver the collected data to the base station. Energy is scarce in sensors and usually cannot be renewed, so prolonging the network's overall lifetime requires prolonging each sensor's lifetime. Node placement and route selection are therefore vital elements of WSNs, as they can significantly affect both network performance and lifetime. Nodes in WSNs can be deployed either randomly or in a fixed manner. In this paper, we are concerned with the fixed deployment of sensors in a grid topology, where many possible routes exist between a source node and a destination node. To reduce power consumption, it is important to find the optimal route. This paper sheds light on the optimality of route selection in a 2x2 grid topology and presents several findings on this issue. The obtained optimal routes account for power consumption, and theoretical bounds are derived on the optimal number of relay nodes in a 2x2 grid. Finally, a preliminary heuristic approach, Energy-Aware Routing (EAR), is proposed based on these findings. The performance of the proposed heuristic is evaluated using simulation, and preliminary results show that the proposed scheme is able to prolong the network lifetime.
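
    As an illustration of energy-aware route selection on a small sensor grid (not the paper's exact formulation or bounds), the sketch below runs Dijkstra with a simple first-order radio energy model as the link cost; the grid size and energy constants are assumptions.

        # Toy minimum-energy route selection on a small grid of sensors
        # (grid size, radio constants, and distance units are illustrative assumptions).
        import heapq
        import itertools

        GRID = 3                                     # 3x3 nodes form a 2x2 grid of cells
        E_ELEC, E_AMP, BITS = 50e-9, 100e-12, 2000   # simple first-order radio model

        def tx_energy(dist):
            return BITS * (E_ELEC + E_AMP * dist ** 2)

        def neighbors(node):
            x, y = node
            for dx, dy in itertools.product((-1, 0, 1), repeat=2):
                nx, ny = x + dx, y + dy
                if (dx or dy) and 0 <= nx < GRID and 0 <= ny < GRID:
                    yield (nx, ny), (dx * dx + dy * dy) ** 0.5

        def min_energy_route(src, dst):
            """Dijkstra over the grid with per-hop transmission energy as the edge weight."""
            best = {src: 0.0}
            prev = {}
            heap = [(0.0, src)]
            while heap:
                e, node = heapq.heappop(heap)
                if node == dst:
                    break
                if e > best.get(node, float("inf")):
                    continue
                for nbr, dist in neighbors(node):
                    cand = e + tx_energy(dist)
                    if cand < best.get(nbr, float("inf")):
                        best[nbr] = cand
                        prev[nbr] = node
                        heapq.heappush(heap, (cand, nbr))
            path, node = [dst], dst
            while node != src:
                node = prev[node]
                path.append(node)
            return list(reversed(path)), best[dst]

        if __name__ == "__main__":
            route, energy = min_energy_route((0, 0), (2, 2))
            print("route:", route, "energy (J):", energy)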

    Dynamic framework to mining Internet of Things for multimedia services

    © 2019 John Wiley & Sons, Ltd. The rapid and unprecedented technological advancements of today are dominated by two technologies. On the one hand, we witness the rise of the Internet of Things (IoT) as the next evolution of the Internet. On the other hand, we witness the vast spread of social networks that connect people socially and open the door for them to share and express ideas, thoughts, and information. The IoT is populated by a vast number of objects, millions of multimedia services, and interactions; therefore, finding the right object that can provide a specific multimedia service is an important issue. The merger of these two technologies has resulted in a new paradigm called the Social IoT (SIoT), whose main idea is that every object can mine the IoT in search of a certain multimedia service. We investigate the issue of friends' management in the SIoT and propose a framework to manage friendship requests. The proposed framework employs several mechanisms to better manage friend relationships and consists of a friend selection component, a friendship removal component, and an update module. It proposes a weight-based algorithm and a Naïve Bayes classifier-based algorithm for the selection component. Moreover, a random service allocation model is proposed to construct a service-specific network model, which is then used in the simulation setup to examine the performance of different friend-management algorithms. The performance of the proposed framework is evaluated using simulation under different scenarios. The obtained results show improvement over other strategies in terms of average degree of connections, average path length, local clustering coefficients, and throughput.
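
    A hypothetical sketch of a weight-based friend-selection step: incoming friendship requests are scored against weighted criteria and the weakest ties are removed when the friend capacity is exceeded. The criteria names, weights, and capacity are illustrative, not the paper's model.

        # Hypothetical weight-based friend selection for an SIoT object
        # (criteria, weights, and MAX_FRIENDS are assumed for illustration).
        from dataclasses import dataclass

        WEIGHTS = {"service_match": 0.5, "owner_trust": 0.3, "proximity": 0.2}
        MAX_FRIENDS = 5

        @dataclass
        class Candidate:
            name: str
            service_match: float   # all criteria normalised to [0, 1]
            owner_trust: float
            proximity: float

            def score(self) -> float:
                return sum(WEIGHTS[k] * getattr(self, k) for k in WEIGHTS)

        def manage_friends(current, requests):
            """Accept requests, then drop the lowest-scoring ties beyond the capacity."""
            merged = sorted(current + requests, key=lambda c: c.score(), reverse=True)
            return merged[:MAX_FRIENDS], merged[MAX_FRIENDS:]

        if __name__ == "__main__":
            current = [Candidate("camera-12", 0.9, 0.6, 0.4), Candidate("thermo-3", 0.2, 0.9, 0.9)]
            requests = [Candidate("speaker-7", 0.8, 0.8, 0.7)]
            accepted, removed = manage_friends(current, requests)
            print("friends:", [c.name for c in accepted], "removed:", [c.name for c in removed])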

    A New Backoff Algorithm for IEEE 802.11 DCF MAC Protocol in Mobile Ad Hoc Networks (Ubiquitous Computing and Communication Journal, ISSN 1992-8424)

    Backoff algorithms are a class of collision resolution algorithms used in the medium access control protocol of mobile ad hoc networks. When different nodes compete to access a shared channel at the same time, collisions become highly probable, especially in networks with heavy traffic load. Collisions are a major problem in wireless networks, so a backoff mechanism should be applied to reduce them and achieve efficient use of the shared channel. This paper proposes and evaluates a new backoff algorithm, the Dynamic Backoff Algorithm (DBA), which combines two increment methods (exponential and linear) applied over N phases with a single linear decrement method.
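
    The sketch below illustrates the increment/decrement structure described for DBA: exponential growth of the contention window in an early phase, linear growth afterwards, and a linear decrease on success; the phase boundary, step size, and window limits are assumed values.

        # Illustrative contention-window updates in the spirit of DBA
        # (phase boundary, step, and CW limits are assumptions, not the paper's values).
        CW_MIN, CW_MAX = 32, 1024
        PHASE_SWITCH = 256          # switch from exponential to linear growth here
        LINEAR_STEP = 64

        def on_collision(cw: int) -> int:
            """Increase the contention window: exponential first, linear afterwards."""
            if cw < PHASE_SWITCH:
                cw *= 2             # phase 1: binary exponential increase
            else:
                cw += LINEAR_STEP   # phase 2: gentler linear increase
            return min(cw, CW_MAX)

        def on_success(cw: int) -> int:
            """Decrease the contention window linearly instead of resetting to CW_MIN."""
            return max(cw - LINEAR_STEP, CW_MIN)

        if __name__ == "__main__":
            cw = CW_MIN
            for event in ["collision"] * 5 + ["success"] * 3:
                cw = on_collision(cw) if event == "collision" else on_success(cw)
                print(f"{event:9s} -> CW = {cw}")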